Abstract: Creativity is increasingly recognized as a core competency for the 21st century, making its development a priority in education, research, and industry. To effectively cultivate creativity, researchers and educators need reliable and accessible assessment tools. Recent software developments have significantly enhanced the administration and scoring of creativity measures; however, existing software often requires expertise in experiment design and computer programming, limiting its accessibility to many educators and researchers. In the current work, we introduce CAP—the Creativity Assessment Platform—a free web application for building creativity assessments, collecting data, and automatically scoring responses (cap.ist.psu.edu). CAP allows users to create custom creativity assessments in ten languages using a simple, point-and-click interface, selecting from tasks such as the Short Story Task, Drawing Task, and Scientific Creative Thinking Test. Users can automatically score task responses using machine learning models trained to match human creativity ratings—with multilingual capabilities, including the new Cross-Lingual Alternate Uses Scoring (CLAUS), a large language model achieving strong prediction of human creativity ratings in ten languages. CAP also provides a centralized dashboard to monitor data collection, score assessments, and automatically generate text for a Methods section based on the study's tasks, metrics, and instructions—with a single click—promoting transparency and reproducibility in creativity assessment. Designed for ease of use, CAP aims to democratize creativity measurement for researchers, educators, and everyone in between.
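To make the cross-lingual scoring idea above concrete, here is a minimal sketch in Python. This is not CAP's or CLAUS's actual pipeline (neither is specified in the abstract); the embedding model and the use of semantic distance as a rough creativity proxy are illustrative assumptions.

```python
# A minimal sketch of cross-lingual semantic-distance scoring. NOT the CLAUS
# pipeline; the model name and distance-as-creativity proxy are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

# A multilingual encoder that maps many languages into one embedding space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def semantic_distance(prompt: str, response: str) -> float:
    """Cosine distance between prompt and response embeddings; greater
    distance is often used as a rough proxy for response novelty."""
    emb = model.encode([prompt, response])
    cos = np.dot(emb[0], emb[1]) / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
    return 1.0 - float(cos)

# The same scorer handles different languages via the shared embedding space.
print(semantic_distance("brick", "use it as a doorstop"))         # English
print(semantic_distance("Ziegelstein", "als Türstopper nutzen"))  # German
```

A trained scorer such as CLAUS presumably goes further by learning to predict human ratings directly, but a shared multilingual representation is what makes scoring ten languages with a single model plausible.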
-
Abstract: The PISA 2022 assessment of creative thinking was a moonshot effort that introduced significant advances over existing creativity tests, including a broad range of domains (written, visual, social, and scientific), implementation in many languages, and sophisticated scoring methods. PISA 2022 demonstrated the general feasibility of assessing creative thinking ability comprehensively at an international scale. However, the complexity of its assessment approach (for example, time-consuming scoring by human raters) risks that it will not be easily adopted by the scientific community and practitioners. In this commentary, we outline important next steps building on the PISA assessment to further enhance future assessments of creative thinking. Crucial future directions include (1) determining which tasks and scoring methods ensure high psychometric quality, including content validity; (2) enabling efficient, objective scoring by applying AI methods such as Large Language Models (LLMs); (3) ensuring broad language accessibility via multilingual tests; (4) targeting a wider age range; and (5) facilitating standardized, reproducible assessments via an open online testing platform. In sum, these developments would lead to an efficient, validated multilingual test of creative thinking, enhancing the accessibility of effective creative thinking assessments and thereby supporting the democratization and reproducibility of creativity research.
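Direction (2) above can be illustrated with a short sketch of an LLM acting as an objective rater. The model name, prompt wording, and 1-to-5 scale here are illustrative assumptions, not part of the PISA assessment or any validated protocol.

```python
# A minimal sketch of LLM-based rating for creative thinking responses.
# Model, prompt, and scale are assumptions chosen only for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm_creativity_rating(task: str, response: str) -> int:
    """Ask an LLM for a single integer originality rating, 1 (low) to 5 (high)."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You rate the originality of answers to creative thinking "
                        "tasks. Reply with one integer from 1 to 5 and nothing else."},
            {"role": "user", "content": f"Task: {task}\nAnswer: {response}"},
        ],
        temperature=0,  # deterministic output aids reproducible scoring
    )
    return int(reply.choices[0].message.content.strip())

print(llm_creativity_rating("List unusual uses for a paperclip.",
                            "A tiny zen garden rake for an ant farm."))
```

Any such scorer would still need the validation against human raters that the commentary calls for under direction (1).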
-
Abstract: Complex cognitive processes, like creative thinking, rely on interactions among multiple neurocognitive processes to generate effective and innovative behaviors on demand, for which the brain's connector hubs play a crucial role. However, the unique contribution of specific hub sets to creative thinking is unknown. Employing three functional magnetic resonance imaging datasets (total N = 1,911), we demonstrate that connector hub sets are organized in a hierarchical manner based on diversity, with "control-default hubs"—which combine regions from the frontoparietal control and default mode networks—positioned at the apex. Specifically, control-default hubs exhibit the most diverse resting-state connectivity profiles and play the most substantial role in facilitating interactions between regions with dissimilar neurocognitive functions, a phenomenon we refer to as "diverse functional interaction". Critically, we found that the involvement of control-default hubs in facilitating diverse functional interaction robustly relates to creativity, explaining both task-induced functional connectivity changes and individual creative performance. Our findings suggest that control-default hubs drive diverse functional interaction in the brain, enabling complex cognition, including creative thinking. We thus uncover a biologically plausible explanation that further elucidates the widely reported contributions of certain frontoparietal control and default mode network regions in creativity studies.
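The abstract does not define its diversity metric, but the participation coefficient is a standard measure of how evenly a node's connections spread across networks and is widely used to identify connector hubs. The sketch below computes it on a toy connectivity matrix as an illustrative analogue, not the paper's actual "diverse functional interaction" measure.

```python
# A minimal sketch of the participation coefficient, a standard connectivity-
# diversity measure for connector hubs; an illustrative analogue only.
import numpy as np

def participation_coefficient(W: np.ndarray, modules: np.ndarray) -> np.ndarray:
    """P_i = 1 - sum_s (k_is / k_i)^2, where k_is is node i's connection
    strength to module s and k_i its total strength. P near 1 means a node's
    connections are spread evenly across networks (a connector hub)."""
    k = W.sum(axis=1)                          # total strength per node
    p = np.ones_like(k)
    for s in np.unique(modules):
        k_s = W[:, modules == s].sum(axis=1)   # strength into module s
        p -= (k_s / k) ** 2
    return p

rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(8, 8)))            # toy weighted connectivity matrix
W = (W + W.T) / 2; np.fill_diagonal(W, 0)      # symmetric, no self-connections
modules = np.array([0, 0, 0, 1, 1, 1, 2, 2])   # toy network assignments
print(participation_coefficient(W, modules).round(2))
```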
-
Abstract: The visual modality is central to both the reception and expression of human creativity. Creativity assessment paradigms, such as structured drawing tasks (Barbot, 2018), seek to characterize this key modality of creative ideation. However, visual creativity assessment paradigms often rely on cohorts of expert or naïve raters to gauge the creativity of the outputs, which comes at the cost of substantial human investment in time and labor. To address these issues, recent work has leveraged machine learning techniques to automatically extract creativity scores in the verbal domain (e.g., SemDis; Beaty & Johnson, 2021). Yet a comparably well-vetted solution for the assessment of visual creativity is missing. Here, we introduce AuDrA—an Automated Drawing Assessment platform—to extract visual creativity scores from simple drawing productions. Using a collection of line drawings and human creativity ratings, we trained AuDrA and tested its generalizability to untrained drawing sets, raters, and tasks. Across four datasets, nearly 60 raters, and over 13,000 drawings, we found AuDrA scores to be highly correlated with human creativity ratings for new drawings on the same drawing task (r = .65 to .81; mean r = .76). Importantly, correlations between AuDrA scores and human ratings surpassed those between drawings' elaboration (i.e., ink on the page) and human ratings, suggesting that AuDrA is sensitive to features of drawings beyond simple complexity. We discuss future directions and limitations, and link the trained AuDrA model and a tutorial (https://osf.io/kqn9v/) to enable researchers to efficiently assess new drawings.
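The validation logic above can be mirrored in a few lines: correlate automated scores and an elaboration (ink-on-page) baseline with mean human ratings, and compare the two. All arrays below are synthetic toy data for illustration, not AuDrA's results.

```python
# A minimal sketch of the predicted-vs-human validation step, on synthetic data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 200
human = rng.normal(size=n)                          # mean human creativity ratings
model_like = human + rng.normal(scale=0.6, size=n)  # scorer tracking the ratings
elaboration = 0.3 * human + rng.normal(size=n)      # weaker ink-based baseline

r_model, _ = pearsonr(model_like, human)
r_base, _ = pearsonr(elaboration, human)
print(f"model vs. human:       r = {r_model:.2f}")
print(f"elaboration vs. human: r = {r_base:.2f}")
# The paper's claim corresponds to r_model reliably exceeding r_base.
```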